I'm currently the proud owner of an 8-node cluster that won't start up.
Yesterday we had a developer doing very high volume writes to our cluster
via a Hadoop job that was reading an HDFS file and running six concurrent
mappers on each of 8 nodes and using Hector to do the load and it sort of
Is this a bug or something I am doing wrong? Can't get past this now.
We are seeing various other messages related to deserialization as well, so
this seems to be some random corruption somewhere, but so far it appears to
be limited to supercolumns.
Terje
On Sat, Mar 5, 2011 at 2:26 AM, Terje Marthinussen
tmarthinus...@gmail.com wrote:
Hi,
Did you get anywhere
Hi Terje,
Can you attach the portion of your logs that shows the exceptions
indicating corruption? Which version are you on right now?
Ben
On 3/4/11 10:42 AM, Terje Marthinussen wrote:
We are seeing various other messages as well related to
deserialization, so this seems to be some random
The EOF exception looks like CASSANDRA-1992, which, if that is the
problem, will be resolved by the scrub tool in 0.7.3.
That release is being voted on right now.
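If it does turn out to be CASSANDRA-1992, the new scrub command rewrites the
damaged SSTables in place once you are on 0.7.3. Roughly (host, keyspace, and
column family names below are placeholders, not from your cluster):

```shell
# Run on each node after upgrading to 0.7.3; scrub rewrites the SSTables
# for the given keyspace/CF, skipping rows it cannot deserialize.
nodetool -h localhost scrub MyKeyspace MyColumnFamily
```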
HTH,
Ben
On 3/4/11 10:32 AM, Matt Kennedy wrote:
I'm currently the proud owner of an 8-node cluster that won't start up.
On Fri, Mar 4, 2011 at 10:09 AM, Roland Gude roland.g...@yoochoose.com wrote:
Hi again,
I am still suffering from this error (which severely limits testability for
me right now). Doesn't anybody have an idea, why IndexSliceQueries work if
the index is created with cassandra-cli and why it
Hi Jonathan,
as Roland is already out of office, I'd like to jump in.
Maybe this somehow got lost in the middle of this thread: indexing works
fine in our real cassandra cluster.
For our test cases, we use an embedded cassandra instance, which is
configured via yaml.
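For reference, a 0.7-style keyspace definition in yaml with a secondary index
looks roughly like the sketch below (keyspace, column family, and column names
are made up for illustration). One thing worth checking is that index_type is
actually set on the column in the yaml, since without it no index is built:

```yaml
keyspaces:
    - name: TestKeyspace
      replica_placement_strategy: org.apache.cassandra.locator.SimpleStrategy
      replication_factor: 1
      column_families:
        - name: Users
          compare_with: UTF8Type
          column_metadata:
            - name: state
              validator_class: UTF8Type
              index_type: KEYS   # needed for index slice queries
```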
In case indexes cannot be
On Fri, Mar 4, 2011 at 11:52 PM, Jürgen Link juergen.l...@googlemail.com wrote:
Hi Jonathan,
as Roland is already out of office, I'd like to jump in.
Maybe this somehow got lost in the middle of this thread: indexing works
fine in our real cassandra cluster.
For our test cases, we use an
I have a small ring of cassandra nodes that have somewhat limited memory
capacity for the moment. Cassandra is eating up all the memory on these
nodes. I'm not sure where to look first in terms of reducing the foot
print. Keys cached? Compaction?
Any hints would be greatly appreciated.
On 03/04/2011 01:53 PM, Casey Deccio wrote:
I have a small ring of cassandra nodes that have somewhat limited memory
capacity for the moment. Cassandra is eating up all the memory on these
nodes. I'm not sure where to look first in terms of reducing the foot
print. Keys cached? Compaction?
Apologies A. J. -- the reference to rack.properties is an error in the
DataStax docs. We'll update it ASAP.
On Thu, Mar 3, 2011 at 10:56 AM, A J s5a...@gmail.com wrote:
Yes, that has topology and not rack.
conf/access.properties conf/log4j-server.properties
Other than adding more memory to the machine is there a way to solve
this? Please help. Thanks
ERROR [COMPACTION-POOL:1] 2011-03-04 11:11:44,891 CassandraDaemon.java
(line org.apache.cassandra.thrift.CassandraDaemon$1) Uncaught exception
in thread Thread[COMPACTION-POOL:1,5,main]
- Does this occur only during compaction or at seemingly random times?
- How large is your heap? What jvm settings are you using? How much
physical RAM do you have?
- Do you have the row and/or key cache enabled? How are they
configured? How large are they when the OOM is thrown?
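Concretely, heap use and cache sizes can be checked from the command line
(hostname is a placeholder):

```shell
# Overall heap usage and load for the node:
nodetool -h localhost info
# Per-column-family key/row cache capacity, size, and hit rate:
nodetool -h localhost cfstats
```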
On 03/04/2011
See also:
http://www.datastax.com/docs/0.7/troubleshooting/index#nodes-are-dying-with-oom-errors
On 03/04/2011 03:05 PM, Chris Burroughs wrote:
- Does this occur only during compaction or at seemingly random times?
- How large is your heap? What jvm settings are you using? How much
physical
- Are you using a key cache? How many keys do you have? Across how
many column families
Your configuration is unusual, both in not setting min heap == max heap and
in the percentage of available RAM used for the heap. Did you
change the heap size in response to errors or for another
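For what it's worth, pinning min and max heap just means passing the same
value to -Xms and -Xmx; in 0.7's conf/cassandra-env.sh that looks roughly
like this (2G is an example value, not a recommendation):

```shell
# conf/cassandra-env.sh -- pin the heap so the JVM never resizes it
MAX_HEAP_SIZE="2G"
JVM_OPTS="$JVM_OPTS -Xms${MAX_HEAP_SIZE}"
JVM_OPTS="$JVM_OPTS -Xmx${MAX_HEAP_SIZE}"
```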
On Fri, Mar 4, 2011 at 11:03 AM, Chris Burroughs
chris.burrou...@gmail.com wrote:
What do you mean by eating up the memory? Resident set size, low
memory available to page cache, excessive gc of the jvm's heap?
jvm's heap is set for half of the physical memory (1982 MB out of 4G), and
jsvc is
I have been through tuning for GC and OOM recently. If you can provide the
cassandra.yaml, I can help. Mostly I had to play with memtable thresholds.
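Concretely, the 0.7-era per-column-family memtable knobs look like the sketch
below (the CF name is hypothetical and the values are illustrative only);
lowering the thresholds bounds how much unflushed data sits on the heap:

```yaml
column_families:
    - name: MyColumnFamily                 # hypothetical CF name
      memtable_throughput_in_mb: 64        # flush after this much data
      memtable_operations_in_millions: 0.3 # ...or this many writes
      memtable_flush_after_mins: 60        # ...or this much time
```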
Thanks,
Naren
On Fri, Mar 4, 2011 at 12:43 PM, Mark static.void@gmail.com wrote:
We have 7 column families and we are not using the default
It's only been a couple of weeks since the last release, but a rather
nasty bug (some details here[1]) has since been fixed, and it seemed
best to get that out to folks sooner rather than later.
The issue in question is well explained in the release notes[3], but the
TL;DR is that users of 0.7.1
That's very nice of you. Thanks.
<Storage>
  <ClusterName>MyCluster</ClusterName>
  <AutoBootstrap>true</AutoBootstrap>
  <HintedHandoffEnabled>true</HintedHandoffEnabled>
  <IndexInterval>128</IndexInterval>
  <Keyspaces>
    <Keyspace Name="MyCompany">
      <ColumnFamily Name="SearchLog"
                    ColumnType="Super"